Record: LeakyReLU² + Legal Score-First TTT + Parallel Muon — val_bpb 1.1194 (3-seed mean) #549
Merged
valerio-oai merged 3 commits into openai:main on Mar 24, 2026
Conversation
Record: LeakyReLU² + Legal Score-First TTT + Parallel Muon — val_bpb 1.1194 (3-seed mean)
LeakyReLU(0.5)² activation (-0.003 vs relu²) + legal score-first TTT (PR openai#461 recipe, 3-epoch SGD, all blocks unfrozen) + BigramHash(1536) on the openai#414 stack with Parameter Banking + Parallel Muon (PR openai#399).

3-seed results:
Seed 1337: 1.1192 bpb, 410s TTT, 15.98 MB
Seed 42: 1.1200 bpb, 408s TTT, 15.88 MB
Seed 2025: 1.1189 bpb, 408s TTT, 15.99 MB
Mean: 1.1194 (std 0.0006)

All artifacts under 16 MB. All eval under 10 min.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from f6a0b0d to 8ff3e0e
ADIITJ added a commit to ADIITJ/parameter-golf that referenced this pull request on Mar 23, 2026
11L, XSA all layers, partial RoPE 16/64, LN scale, VE128 (layers 9,10), LeakyReLU(0.5)² activation, BigramHash(2048), INT6+zstd-22. Legal score-first TTT: 32K chunks, all blocks, SGD(0.002,mom=0.9), 3ep. Base: PR openai#503 (EthanYangTW) + LeakyReLU² from openai#518/openai#549 + SGD from openai#549. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
anthony-maio added a commit to anthony-maio/parameter-golf that referenced this pull request on Mar 24, 2026
Multiple top PRs (openai#535, openai#549, openai#569) demonstrate -0.0015 to -0.003 bpb from this change. LeakyReLU preserves gradient flow through negative pre-activations while maintaining the sparsity/gating benefits of squaring. At 22M params, dead neurons from hard ReLU are expensive. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor
Looks legal, clears the 0.005 nats test, so merging into the leaderboard. Well done!
valerio-oai approved these changes on Mar 24, 2026
Contributor (Author)
ayeee

Contributor (Author)
@valerio-oai just noticed there's a wrong username in the leaderboard.
Rajat123456789 added a commit to Rajat123456789/parameter-golf that referenced this pull request on Mar 24, 2026
Four novel improvements over the PR openai#549 (1.1194 BPB) base:
- Full GPTQ quantization with Hessian-guided error compensation
- Soft-round QAT with tanh-based temperature annealing
- LoRA-based test-time training (rank-8 adapters on Q/K/V/O)
- Entropy-coded compression (Huffman+LZMA adaptive selection)

Made-with: Cursor
senstar-hsoleimani added a commit to senstar-hsoleimani/parameter-golf that referenced this pull request on Mar 24, 2026
Track: 10min_16mb
Based on: PR openai#549 (LeakyReLU+ParallelMuon), PR openai#606 (Soft-Round+AdamW TTT), PR openai#609 (XSA-all+Full GPTQ)

Changes from SOTA (openai#549):
- XSA on all 11 layers (was 4)
- Soft-Round QAT with tanh-based differentiable rounding (alpha 1->16; sketched below)
- Full GPTQ with Hessian-aware column-reordered Cholesky error compensation
- MHA 8/8 (was GQA 8/4)
- MLP 3.5x expansion (1792 hidden, was 3.0x/1536)
- BigramHash vocabulary 8192 (was 2048)
- AdamW TTT with grouped LR and cosine schedule (was SGD)
- Early QAT threshold 0.5 (was late 0.15)
- Selective ±1 magnitude pruning to hit size target
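The tanh-based differentiable rounding referenced above is presumably the standard soft-round surrogate (as in Agustsson and Theis, 2020); a minimal sketch, with alpha annealed from 1 to 16 over training to harden it toward true rounding:

```python
import math
import torch

def soft_round(x: torch.Tensor, alpha: float) -> torch.Tensor:
    """Differentiable surrogate for round(x): near-identity for small
    alpha, approaching hard rounding as alpha grows (here 1 -> 16)."""
    m = torch.floor(x) + 0.5           # midpoint of x's integer cell
    r = x - m                          # signed offset in [-0.5, 0.5)
    return m + 0.5 * torch.tanh(alpha * r) / math.tanh(alpha / 2)
```

At alpha near 0 this reduces to the identity (gradients flow freely); as alpha grows, tanh(alpha*r)/tanh(alpha/2) approaches sign(r) and the output snaps to the nearest integer.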
Contributor
whoops, really sorry about the wrong username -- I thought something looked wrong! Fixing it now
sunnypatneedi added a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 24, 2026
Run 0: PR openai#549 UNMODIFIED (merged SOTA 1.1194, verified 3-seed)
Run 1: PR openai#549 + TTT_ENABLED=1 + TTT_LR=0.0005 (2 lines changed)

Both have FA3→FA2→SDPA fallback for non-Hopper GPUs. Following retro: one change per run, baseline first. Expected: Run 1 should achieve ~1.094-1.104 (beats 1.1144 target).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 24, 2026
Documents merged SOTA of 1.1194 (PR openai#549, LeakyReLU² + Legal TTT + Parallel Muon), confirmed technique deltas, enforcement ruling on GPTQ calibration, and the path forward to beat 1.1144. https://claude.ai/code/session_01U3LXGzTkedd9ZcHF2qgW7d
RichiiiTV pushed a commit to RichiiiTV/parameter-golf that referenced this pull request on Mar 24, 2026
abaybektursun added a commit to abaybektursun/parameter-golf that referenced this pull request on Mar 24, 2026
Case study: reordering training shards by model difficulty (hardest first) gives a -0.0033 BPB improvement over sequential ordering. Zero architecture changes, zero compute cost, ten lines of code.

Key finding: token-level statistics (KL divergence) find a 0.0009 range across shards. Model perplexity finds a 0.0475 range -- 100x more variation. The two metrics are uncorrelated (r = -0.056).

3-seed validated on PR openai#549 (merged openai#1):
Seed 1337: 1.1217 -> 1.1183 (-0.0034)
Seed 42: 1.1222 -> 1.1181 (-0.0041)
Seed 2025: 1.1221 -> 1.1198 (-0.0023)
Mean: 1.1220 -> 1.1187 (-0.0033)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
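A rough sketch of what a hardest-first reordering of this kind could look like; the load_shard and eval_loss callables are hypothetical placeholders for whatever the training harness provides:

```python
import torch

@torch.no_grad()
def order_shards_hardest_first(model, shard_paths, load_shard, eval_loss):
    """Score each training shard by the current model's loss on it
    (a proxy for perplexity) and return the paths sorted hardest-first.
    `load_shard` and `eval_loss` are hypothetical harness callables."""
    scored = []
    for path in shard_paths:
        inputs, targets = load_shard(path)
        scored.append((eval_loss(model, inputs, targets), path))
    # Highest loss (hardest) first.
    return [path for _, path in sorted(scored, reverse=True)]
```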
manfromnowhere143 added a commit to manfromnowhere143/parameter-golf that referenced this pull request on Mar 26, 2026
Full stack: 11L LeakyReLU(0.5)² + XSA4 + Partial RoPE + LN Scale + EMA + Parallel Muon + GPTQ-lite int6 + Legal TTT + N-gram Oracle Cache. Base: PR openai#549 lineage (1.1194 BPB leaderboard openai#1). Addition: Vectorized bigram cache with entropy-adaptive neural/n-gram mixing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
lolrazh added a commit to lolrazh/parameter-golf that referenced this pull request on Mar 26, 2026
TTT_ENABLED defaulted to 0 (off) and TTT_FREEZE_BLOCKS defaulted to 2 in train_gpt.py. SOTA PR openai#549 runs with all blocks unfrozen. Without these, the prod submission would skip TTT entirely and freeze 2 blocks during eval — losing the biggest single BPB gain. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
filipviz added a commit to filipviz/parameter-golf that referenced this pull request on Mar 26, 2026
…baseline
Starting from the current frontier for structural hyperparameter search.
RoyiRa added a commit to RoyiRa/parameter-golf that referenced this pull request on Mar 26, 2026
TTT_EPOCHS=1 with order=7, alpha=0.60, min_count=1. Fully defensible (matches merged PR openai#549 TTT pattern). Only 0.011 worse than 4-epoch V30b (0.8901). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
lolrazh added a commit to lolrazh/parameter-golf that referenced this pull request on Mar 26, 2026
…9958 (3-seed mean)
3-seed mean: 0.9958 BPB (std 0.0017). Seeds 1337/42/2025: 0.9977/0.9947/0.9949.

Built on the PR openai#549 stack with four changes:
- Backward-looking 7-gram eval cache (alpha=0.2, score-first, ~98% hit rate)
- Entropy-regularized QAT (halves the quant gap: 0.009 vs 0.017)
- Mixed int5/int6 quantization (front3_back1_6_middle5) + per-row GPTQ-lite
- LeakyReLU(0.9)² (+0.013 BPB vs 0.5 slope)

All artifacts under 16MB (~14.0 MB). All eval under 10 min (~552s TTT+ngram).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
kasimte pushed a commit to kasimte/parameter-golf that referenced this pull request on Mar 26, 2026
Fraser-Greenlee added a commit to Fraser-Greenlee/parameter-golf that referenced this pull request on Mar 26, 2026
- Replace train_gpt_lookahead.py with SOTA train_gpt.py (abaybektursun's LeakyReLU² + Parameter Banking + Legal TTT from PR openai#549)
- Update CLAUDE.md with SOTA source path and new run command
- Add E17-E22 to EXPERIMENT_LOG.md:
  - E17: int8 for mlp_down quantization (memory-saving)
  - E18: extend VE to layers 7-8 (performance)
  - E19: bigger bigram table 4096/8192 (performance)
  - E20: remove/freeze layer 0 attention (memory-saving)
  - E21: analysis-informed MLP profile (performance)
  - E22: depth recurrence for middle layers (memory-saving)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
abaybektursun added a commit to abaybektursun/parameter-golf that referenced this pull request on Mar 26, 2026
- Base model is ValCalib GPTQ (1.1142 BPB), not PR openai#549 (1.1194)
- Remove stale "not yet deployed" / "we estimate" for EXP-11
- Note α=0.80 (939s) exceeds the 600s budget
- Fix PR openai#727 score to 0.9674, PR openai#788 to 0.9059
- Fix PR openai#596 BPB to 0.6430
- "Approved" → "Technique deemed legal" for closed PRs
- Add bucket sweep and per-token overhead proposal
- Replace "neural" with "base LM" throughout

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sunnypatneedi added a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 26, 2026
3-seed mean 0.8609 bpb (42→0.8600, 1337→0.8611, 2025→0.8616). All artifacts under 16MB. 11-gram n-gram cache with entropy-adaptive alpha and Hedge Mixer on PR openai#549 base architecture. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hilo-Hilo added a commit to Hilo-Hilo/parameter-golf that referenced this pull request on Mar 27, 2026
- Merged SkyPilot/Shadeform dispatch backend
- Copied PR openai#549 SOTA train_gpt.py (LeakyReLU + Legal TTT + Parallel Muon, 1.1194 bpb) to repo root as the base for iteration
- Saved stock trainer as train_gpt_stock.py
- Updated worker_program.md: 8xH100 official track mode, SOTA improvement directions (GPTQ-lite, EMA, partial RoPE, XSA, QAT)
- Reset node tree for fresh swarm start
nvemuri4649 pushed a commit to thanushpatlolla/parameter-golf that referenced this pull request on Mar 27, 2026
…u-legal-ttt-1.1183
Record: LeakyReLU² + Legal Score-First TTT + Parallel Muon — val_bpb 1.1194 (3-seed mean)
autocode-rayes added a commit to autocode-rayes/parameter-golf that referenced this pull request on Mar 27, 2026
Three changes on the PR openai#549 stack:
- XSA on all 11 layers (was last 4)
- Manifold-constrained hyper-connections (22 extra params)
- Full-training QAT (LATE_QAT_THRESHOLD=1.0)

Seed 1337: sliding_window=1.1229, legal_ttt=1.1211
Artifact: 15.95 MB, 8xH100 SXM, 600s train + 482s eval

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hilo-Hilo added a commit to Hilo-Hilo/parameter-golf that referenced this pull request on Mar 27, 2026
The SOTA PR openai#549 train_gpt.py uses flash_attn_interface (FA3) which requires building from source on Hopper GPUs. Add graceful fallback: FA3 -> FA2 -> PyTorch scaled_dot_product_attention (SDPA). SDPA is ~10-15% slower than FA3 but works everywhere without special builds.
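A minimal sketch of such a fallback chain. The two flash_attn import paths are the packages' real entry points; the wrapper function and backend flag are illustrative, not the PR's exact code:

```python
import torch
import torch.nn.functional as F

# Try FA3 (Hopper-only source build), then FA2, else fall back to SDPA.
try:
    from flash_attn_interface import flash_attn_func  # FA3
    _BACKEND = "fa3"
except ImportError:
    try:
        from flash_attn import flash_attn_func  # FA2
        _BACKEND = "fa2"
    except ImportError:
        flash_attn_func = None
        _BACKEND = "sdpa"

def attention(q, k, v, causal=True):
    """q, k, v: (batch, seqlen, nheads, headdim)."""
    if _BACKEND != "sdpa":
        out = flash_attn_func(q, k, v, causal=causal)
        if isinstance(out, tuple):  # some FA3 builds also return the LSE
            out = out[0]
        return out
    # PyTorch SDPA expects (batch, nheads, seqlen, headdim).
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=causal)
    return out.transpose(1, 2)
```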
Record: LeakyReLU² + Legal TTT + Parallel Muon — val_bpb 1.1194
val_bpb = 1.1194 (3-seed mean, std 0.0006) | ~15.95 MB | 8×H100 SXM
3-Seed Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128)
Seed 1337: 1.1192 bpb, 410s TTT, 15.98 MB
Seed 42: 1.1200 bpb, 408s TTT, 15.88 MB
Seed 2025: 1.1189 bpb, 408s TTT, 15.99 MB
Mean: 1.1194 bpb (std 0.0006)
Key Innovation: LeakyReLU(0.5)²
One-line activation change delivering -0.003 BPB vs standard relu²:
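The snippet that followed this line was lost in extraction; below is a minimal sketch of one plausible reading (squaring the leaky output directly; the PR's exact form may instead preserve the sign of negative inputs):

```python
import torch
import torch.nn.functional as F

def leaky_relu_squared(x: torch.Tensor, negative_slope: float = 0.5) -> torch.Tensor:
    """LeakyReLU(0.5)² sketch: like relu², but negative pre-activations
    keep slope 0.5, so their gradient (2 * slope² * x for x < 0) is
    nonzero and neurons cannot die the way they do under hard ReLU."""
    return F.leaky_relu(x, negative_slope).square()
```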
Preserves negative gradient flow through the MLP. Source: PR #493 by @parinzee (ablated at -0.003), PR #518 by @sofiabod.
Legal TTT (Score-First, PR #461 Framework)
Every token is scored BEFORE any weight update, enforced by torch.inference_mode(). Adapted from PR #461 by @Christopher-Lee-McClendon (changed freeze=2 → freeze=0, based on our ablation showing that unfreezing all blocks is optimal at 3 epochs).
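A minimal sketch of the score-first pattern as described; the chunk iteration, per-chunk epoch placement, and the assumption that model(inputs, targets) returns a mean cross-entropy are mine, not the PR's exact code:

```python
import torch

def score_first_ttt(model, chunks, epochs=3, lr=0.002, momentum=0.9):
    """Score every chunk under torch.inference_mode() BEFORE training
    on it, so no token's score ever reflects a weight update that saw
    that token. Hyperparameters follow the PR text (SGD 0.002, mom 0.9,
    3 epochs, all blocks unfrozen)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    total_loss, total_tokens = 0.0, 0
    for inputs, targets in chunks:
        # 1) Score first: frozen forward pass, counted toward val_bpb.
        with torch.inference_mode():
            total_loss += model(inputs, targets).item() * targets.numel()
            total_tokens += targets.numel()
        # 2) Only then adapt on the chunk that was just scored.
        for _ in range(epochs):
            opt.zero_grad(set_to_none=True)
            model(inputs, targets).backward()
            opt.step()
    return total_loss / total_tokens  # mean loss over pre-update scores
```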
Total eval: ~530s (120s standard + 409s TTT) — within 10 min limit.
Training Architecture
PR #414 stack + Parameter Banking + Parallel Muon (PR #399):
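The stack listing after this colon was lost in extraction. For background, the core of a Muon-style update is Newton-Schulz orthogonalization of each 2-D weight gradient; a minimal sketch follows, with coefficients as in the public modded-nanogpt Muon. How the "parallel" variant of PR #399 shards this per-parameter work across ranks is an assumption not shown here:

```python
import torch

@torch.no_grad()
def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately map G to the nearest semi-orthogonal matrix via a
    quintic Newton-Schulz iteration (the core of the Muon optimizer).
    Muon then applies this to the momentum buffer: p -= lr * NS(buf)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.bfloat16()
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.mT
    X = X / (X.norm() + 1e-7)  # ensure spectral norm <= 1
    for _ in range(steps):
        A = X @ X.mT
        X = a * X + (b * A + c * A @ A) @ X
    return (X.mT if transposed else X).to(G.dtype)
```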
Credits
🤖 Generated with Claude Code